Cheshire Cat AI is a production-ready, open-source framework for building and deploying autonomous AI agents that interact with data and external systems. Its core value proposition is a robust, containerized platform where developers can create conversational AI assistants capable of reasoning, accessing custom knowledge bases, and performing actions through API integrations, moving beyond simple chatbots to true agentic systems.
Key features: The framework allows you to train your agent on your own documents by uploading various file types including PDFs, text files, markdown, JSON, and web page content, creating a long-term memory. It enables the agent to interact with the world through easy connections to external APIs and applications, allowing it to fetch data or trigger actions. Users have the flexibility to choose their underlying language models and embedders, supporting both commercial APIs and open-source LLMs. The entire system is 100% dockerized for plug-and-play deployment and includes a live reload feature for development. Extensibility is a priority, with a one-click plugin system to install community-built add-ons for additional functionalities.
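Because the platform ships fully dockerized, a typical deployment is a single Compose service. The sketch below is illustrative only: the image name, port mapping, and volume paths are assumptions based on common Cheshire Cat setups and should be checked against the project's own documentation.

```yaml
# Hypothetical docker-compose.yml for self-hosting Cheshire Cat AI.
# Image name, port, and mount paths are assumptions; verify against
# the official repository before use.
version: "3.7"
services:
  cheshire-cat-core:
    image: ghcr.io/cheshire-cat-ai/core:latest  # assumed image name
    ports:
      - "1865:80"          # expose the Cat's API and admin UI locally
    volumes:
      - ./plugins:/app/cat/plugins  # community/custom plugins (live reload)
      - ./data:/app/cat/data        # persistent memory and settings
```

Mounting the `plugins` directory from the host is what makes the live-reload development loop possible: edits on the host are picked up by the running container.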
What sets Cheshire Cat AI apart is its strong focus on being a developer-friendly, modular platform for creating persistent AI agents, not just chatbots. It is built with a plugin architecture that separates core logic from integrations, making it highly customizable. Technically, it handles conversation history, tool usage, and memory management (episodic and declarative) under the hood. Its ability to work seamlessly with both proprietary models like OpenAI's GPT and open-source models via Ollama or local inference provides significant flexibility and cost-control for different project scales.
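The plugin architecture described above centers on registering tools that the agent can choose to invoke. The toy registry below illustrates that decorator-based mechanism in plain Python; it is not Cheshire Cat's actual implementation (the real framework provides its own `tool` decorator), and `get_policy` is a made-up example tool.

```python
# Minimal sketch of a decorator-based tool registry, illustrating the
# plugin pattern; NOT Cheshire Cat's actual code.
TOOLS = {}

def tool(func):
    """Register a function as an agent tool. In frameworks like this,
    the docstring becomes the description the LLM reads to decide
    when to call the tool."""
    TOOLS[func.__name__] = func
    return func

@tool
def get_policy(topic: str) -> str:
    """Useful to look up a company policy by topic."""
    # A real plugin would query the agent's declarative memory here;
    # this dict stands in for that lookup.
    policies = {"vacation": "25 days per year"}
    return policies.get(topic, "No policy found.")

def run_tool(name: str, arg: str) -> str:
    # The agent selects the registered tool whose description best
    # matches the user's request, then calls it with extracted input.
    return TOOLS[name](arg)

print(run_tool("get_policy", "vacation"))  # → 25 days per year
```

Separating tools (registered by plugins) from the core reasoning loop is what lets integrations be installed or removed without touching the framework itself.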
Ideal for developers, engineering teams, and businesses looking to build customized AI assistants for customer support, internal knowledge management, process automation, or as a backbone for more complex AI applications. Specific use cases include creating a support agent trained on product manuals, an internal HR assistant that queries company policies, or a research agent that summarizes web content and saves notes to a database. It is particularly valuable in industries like IT services, SaaS, education, and any sector requiring tailored AI interactions with proprietary data.
Pricing for the core open-source framework is free, with costs incurred based on the chosen LLM providers (e.g., OpenAI API costs) or infrastructure. The team may offer managed cloud services or enterprise support tiers in the future, but the core platform remains freely accessible for self-hosting and customization.